Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks

Authors

Charlotte Frenkel, Martin Lefebvre and David Bol

Abstract

While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving both problems while relaxing the computational requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for edge computing devices.
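The abstract describes the mechanism only at a high level, so here is a minimal NumPy sketch of the idea: each hidden layer is trained during the forward pass, using a fixed random projection of the one-hot target as its learning signal. The layer sizes, initialization, learning rate, and the ReLU/softmax choices below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed MLP dimensions for illustration: 784 -> 256 -> 256 -> 10.
sizes = [784, 256, 256, 10]

# Trainable forward weights and biases.
W = [rng.normal(0.0, np.sqrt(2.0 / m), size=(n, m)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

# Fixed random target-projection matrices, one per hidden layer; never updated.
B = [rng.normal(0.0, 1.0, size=(n, sizes[-1])) for n in sizes[1:-1]]

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def drtp_step(x, y_onehot, lr=1e-3):
    """One training step on a single example: every hidden layer is updated
    during the forward pass, using only a fixed random projection of the
    target as its modulatory signal (no backward pass, no weight transport)."""
    a = x
    for k in range(len(W) - 1):
        z = W[k] @ a + b[k]
        delta = (B[k] @ y_onehot) * (z > 0)  # target projection gated by the ReLU derivative
        W[k] -= lr * np.outer(delta, a)      # local update, made during the forward pass
        b[k] -= lr * delta
        a = relu(z)
    # The output layer is trained with the standard cross-entropy gradient,
    # which is likewise local to that layer.
    z_out = W[-1] @ a + b[-1]
    delta_out = softmax(z_out) - y_onehot
    W[-1] -= lr * np.outer(delta_out, a)
    b[-1] -= lr * delta_out
```

Because the modulatory signal depends only on the label and a fixed matrix, a layer can be updated as soon as its forward computation finishes, which is what removes both update locking and the need to transport the forward weights into a backward pass.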


Related articles

Random feedback weights support learning in deep neural networks

The brain processes information through many layers of neurons. This deep architecture is representationally powerful [1-4], but it complicates learning by making it hard to identify the responsible neurons when a mistake is made [1,5]. In machine learning, the backpropagation algorithm [1] assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error...
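For contrast with DRTP, the result previewed above (feedback alignment) keeps a backward pass but routes the error through fixed random matrices instead of the transposed forward weights. A minimal single-hidden-layer sketch, with assumed dimensions, loss, and nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 784, 256, 10  # assumed sizes for illustration

W1 = rng.normal(0.0, 0.01, (n_hid, n_in))   # trained forward weights
W2 = rng.normal(0.0, 0.01, (n_out, n_hid))
B2 = rng.normal(0.0, 0.01, (n_out, n_hid))  # fixed random feedback matrix, stands in for W2

def fa_step(x, y, lr=1e-2):
    """One feedback-alignment step on a single example (squared loss)."""
    global W1, W2  # update the module-level weights for brevity
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y
    # Blame is routed through the fixed random B2 rather than W2.T, so the
    # backward pass never needs to read the forward weights.
    dh = (B2.T @ e) * (1.0 - h ** 2)  # tanh derivative
    W2 = W2 - lr * np.outer(e, h)
    W1 = W1 - lr * np.outer(dh, x)
```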


Learning Stochastic Feedforward Neural Networks

Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X. However, this predictive distribution is assumed to be unimodal (e.g., Gaussian). For tasks involving structured prediction, the conditional distribution should...


Distributed learning algorithm for feedforward neural networks

With the appearance of huge data sets, new challenges have arisen regarding the scalability and efficiency of Machine Learning algorithms, and both distributed computing and randomized algorithms have become effective ways to handle them. Taking advantage of these two approaches, a distributed learning algorithm for two-layer neural networks is proposed. Results demonstrate a similar accuracy whe...


Random Walk Initialization for Training Very Deep Feedforward Networks

Training very deep networks is an important open problem in machine learning. One of many difficulties is that the norm of the back-propagated error gradient can grow or decay exponentially. Here we show that training very deep feed-forward networks (FFNs) is not as difficult as previously thought. Unlike when backpropagation is applied to a recurrent network, application to an FFN amounts to m...
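The exponential growth or decay mentioned here is easy to reproduce numerically. The toy below (assumed depth, width, and Gaussian initialization; linear layers only, and not the paper's initialization scheme) pushes a unit error vector backward through a deep stack and prints its final norm for three weight scales:

```python
import numpy as np

rng = np.random.default_rng(2)
depth, width = 50, 128  # assumed network shape for illustration

def backprop_norm(scale):
    """Propagate a unit-norm error vector backward through `depth` random
    linear layers and return its final norm."""
    g = np.ones(width) / np.sqrt(width)  # unit-norm error at the top layer
    for _ in range(depth):
        W = rng.normal(0.0, scale / np.sqrt(width), (width, width))
        g = W.T @ g  # one backward step: multiply by the layer's transposed weights
    return np.linalg.norm(g)

print(backprop_norm(0.5))  # scale too small: norm decays exponentially with depth
print(backprop_norm(2.0))  # scale too large: norm grows exponentially
print(backprop_norm(1.0))  # near-critical scale: norm stays roughly O(1)
```

With i.i.d. N(0, scale^2/width) entries, each backward step multiplies the expected squared norm by scale^2, so the norm shrinks or blows up geometrically in depth unless the scale sits near its critical value.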



Journal

Journal title: Frontiers in Neuroscience

Year: 2021

ISSN: 1662-453X, 1662-4548

DOI: https://doi.org/10.3389/fnins.2021.629892